
    Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    This paper proposes a robust fault-tolerant control algorithm for satellite attitude stabilization based on the active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. Reaction flywheel speed measurements are used for fault detection. If a reaction flywheel fault is detected, the faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite keeps working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm to improve the performance of the attitude control system. A series of simulation experiments demonstrates the performance advantages of the proposed robust fault-tolerant control algorithm.
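    The abstract above pairs a linear extended state observer (ESO) with an artificial bee colony (ABC) search over the observer bandwidth. The following Python sketch illustrates both ingredients under simplifying assumptions: a first-order plant for the ESO, a user-supplied cost function (e.g. an ITAE index from a closed-loop simulation), and a reduced ABC loop without the onlooker phase. All names and tuning choices are illustrative, not the paper's implementation.

```python
import numpy as np

def eso_step(z, y, u, omega_o, b0, dt):
    """One Euler step of a linear extended state observer for a first-order
    plant: z = [state estimate, estimated total disturbance].  Both observer
    gains are placed at the bandwidth omega_o (a common linear-ESO tuning)."""
    beta1, beta2 = 2.0 * omega_o, omega_o ** 2
    e = y - z[0]
    z_dot = np.array([z[1] + b0 * u + beta1 * e,
                      beta2 * e])
    return z + dt * z_dot

def abc_search(cost, bounds, n_bees=10, n_iter=50, limit=5, rng=None):
    """Reduced artificial-bee-colony search over a scalar parameter (here the
    ESO bandwidth).  cost(omega_o) is assumed to run a closed-loop simulation
    and return a performance index such as ITAE; the onlooker phase is omitted."""
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    food = rng.uniform(lo, hi, n_bees)              # candidate bandwidths
    fit = np.array([cost(f) for f in food])
    trials = np.zeros(n_bees)
    for _ in range(n_iter):
        for i in range(n_bees):                     # employed-bee phase
            k = rng.integers(n_bees)
            cand = np.clip(food[i] + rng.uniform(-1, 1) * (food[i] - food[k]), lo, hi)
            c = cost(cand)
            if c < fit[i]:
                food[i], fit[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        stale = trials > limit                      # scout phase: reseed stagnant sources
        food[stale] = rng.uniform(lo, hi, stale.sum())
        fit[stale] = [cost(f) for f in food[stale]]
        trials[stale] = 0
    return food[fit.argmin()]
```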

    Fine-Granularity Transmission Distortion Modeling for Video Packet Scheduling Over Mesh Networks

    Digital Object Identifier: 10.1109/TMM.2009.2036290. Packet scheduling is a critical component in multi-session video streaming over mesh networks. Different video packets have different levels of contribution to the overall video presentation quality at the receiver side. In this work, we develop a fine-granularity transmission distortion model for the encoder to predict the quality degradation of decoded videos caused by lost video packets. Based on this packet-level transmission distortion model, we propose a content-and-deadline-aware scheduling (CDAS) scheme for multi-session video streaming over multi-hop mesh networks, where content priority, queuing delays, and dynamic network transmission conditions are jointly considered for each video packet. Our extensive experimental results demonstrate that the proposed transmission distortion model and the CDAS scheme significantly improve the performance of multi-session video streaming over mesh networks.
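    As a rough illustration of content-and-deadline-aware scheduling, the sketch below orders packets by predicted distortion per unit of deadline slack and drops packets that cannot meet their deadline under the estimated network delay. The priority formula and data fields are assumptions for illustration only; the paper's CDAS scheme uses its own distortion model and network state.

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class VideoPacket:
    priority: float                              # filled in by the scheduler
    packet_id: int = field(compare=False)
    distortion: float = field(compare=False)     # predicted decoder distortion if this packet is lost
    deadline: float = field(compare=False)       # absolute presentation deadline in seconds

def schedule(packets, est_delay, now=None):
    """Order packets by predicted distortion per unit of deadline slack and
    drop packets that cannot arrive in time given the estimated one-way delay."""
    now = time.time() if now is None else now
    heap = []
    for p in packets:
        slack = p.deadline - now - est_delay
        if slack <= 0:
            continue                             # too late to be useful at the decoder
        p.priority = -p.distortion / slack       # more distortion, tighter slack -> served earlier
        heapq.heappush(heap, p)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```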

    Association of the low-density lipoprotein cholesterol/high-density lipoprotein cholesterol ratio and concentrations of plasma lipids with high-density lipoprotein subclass distribution in the Chinese population

    Background: To evaluate the relationship between the low-density lipoprotein cholesterol (LDL-C)/high-density lipoprotein cholesterol (HDL-C) ratio and HDL subclass distribution, and to further examine the potential impact of LDL-C and HDL-C, together with TG, on HDL subclass metabolism. Results: Small-sized preβ1-HDL, HDL3b and HDL3a increased significantly while large-sized HDL2a and HDL2b decreased significantly as the LDL-C/HDL-C ratio increased. Subjects with a low HDL-C level (< 1.03 mmol/L) had an elevated LDL-C/HDL-C ratio and a reduced HDL2b/preβ1-HDL ratio regardless of whether the LDL-C level was desirable or high. At desirable LDL-C levels (< 3.34 mmol/L), the HDL2b/preβ1-HDL ratio was 5.4 for subjects with a high HDL-C concentration (≥ 1.55 mmol/L); however, at high LDL-C levels (≥ 3.36 mmol/L), the LDL-C/HDL-C ratio was 2.8 and the HDL2b/preβ1-HDL value was extremely low in these subjects, despite the high HDL-C concentration. Conclusion: With the increase of the LDL-C/HDL-C ratio, there was a general shift toward smaller-sized HDL particles, which implies that the maturation process of HDL was blocked. High HDL-C concentrations can regulate the HDL subclass distribution at desirable and borderline LDL-C levels but cannot counteract the influence of high LDL-C levels on HDL subclass distribution.

    Energy-Efficient Control Adaptation with Safety Guarantees for Learning-Enabled Cyber-Physical Systems

    Neural networks have been increasingly applied for control in learning-enabled cyber-physical systems (LE-CPSs) and have demonstrated great promise in improving system performance and efficiency, as well as reducing the need for complex physical models. However, the lack of safety guarantees for such neural network based controllers has significantly impeded their adoption in safety-critical CPSs. In this work, we propose a controller adaptation approach that automatically switches among multiple controllers, including neural network controllers, to guarantee system safety and improve energy efficiency. Our approach includes two key components based on formal methods and machine learning. First, we approximate each controller with a Bernstein-polynomial based hybrid system model under bounded disturbance, and compute a safe invariant set for each controller based on its corresponding hybrid system. Intuitively, the invariant set of a controller defines the state space where the system can always remain safe under its control. The union of the controllers' invariant sets then defines a safe adaptation space that is larger than (or equal to) that of each individual controller. Second, we develop a deep reinforcement learning method to learn a controller switching strategy that reduces the control/actuation energy cost while, with the help of a safety guard rule, ensuring that the system stays within the safe space. Experiments on a linear adaptive cruise control system and a non-linear Van der Pol oscillator demonstrate the effectiveness of our approach for energy saving and safety enhancement.
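    A minimal sketch of the safety-guarded switching idea follows: each controller carries an approximation of its safe invariant set, the preferred (e.g. learned) controller is used only while the state stays inside its own set, and otherwise the guard falls back to the cheapest controller whose set still contains the state. The box-shaped invariant sets and toy controllers below are stand-ins for the paper's Bernstein-polynomial reachability analysis and DRL switching policy.

```python
import numpy as np

class Controller:
    """A control law paired with an approximation of its safe invariant set."""
    def __init__(self, name, act, invariant, energy):
        self.name = name
        self.act = act              # state -> control input
        self.invariant = invariant  # state -> bool: True if the state is inside the safe set
        self.energy = energy        # rough per-step actuation cost

def safe_switch(controllers, preferred_idx, x):
    """Use the preferred (e.g. learned) controller only while the state lies in
    its invariant set; otherwise fall back to the cheapest controller whose
    invariant set still contains the state."""
    if controllers[preferred_idx].invariant(x):
        return controllers[preferred_idx]
    admissible = [c for c in controllers if c.invariant(x)]
    if not admissible:
        raise RuntimeError("state outside every invariant set: no safe controller available")
    return min(admissible, key=lambda c: c.energy)

# Toy usage: two controllers on a 2-D state with box-shaped invariant sets.
ctrls = [
    Controller("nn",  lambda x: -0.5 * x, lambda x: bool(np.all(np.abs(x) < 1.0)), energy=0.2),
    Controller("pid", lambda x: -2.0 * x, lambda x: bool(np.all(np.abs(x) < 5.0)), energy=1.0),
]
x = np.array([2.0, -0.3])
u = safe_switch(ctrls, preferred_idx=0, x=x).act(x)
```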

    31st Annual Meeting and Associated Programs of the Society for Immunotherapy of Cancer (SITC 2016) : part two

    Background: The immunological escape of tumors represents one of the main obstacles to the treatment of malignancies. The blockade of PD-1 or CTLA-4 receptors represented a milestone in the history of immunotherapy. However, immune checkpoint inhibitors seem to be effective in specific cohorts of patients. It has been proposed that their efficacy relies on the presence of an immunological response. Thus, we hypothesized that disruption of the PD-L1/PD-1 axis would synergize with our oncolytic vaccine platform PeptiCRAd. Methods: We used murine B16OVA in vivo tumor models and flow cytometry analysis to investigate the immunological background. Results: First, we found that high-burden B16OVA tumors were refractory to combination immunotherapy. However, with a more aggressive schedule, tumors with a lower burden were more susceptible to the combination of PeptiCRAd and PD-L1 blockade. The therapy significantly increased the median survival of mice (Fig. 7). Interestingly, the reduced growth of contralaterally injected B16F10 cells suggested the presence of a long-lasting immunological memory also against non-targeted antigens. Concerning the functional state of tumor-infiltrating lymphocytes (TILs), we found that all the immune therapies enhanced the percentage of activated (PD-1pos TIM-3neg) T lymphocytes and reduced the amount of exhausted (PD-1pos TIM-3pos) cells compared to placebo. As expected, we found that PeptiCRAd monotherapy could increase the number of antigen-specific CD8+ T cells compared to other treatments. However, only the combination with PD-L1 blockade could significantly increase the ratio between activated and exhausted pentamer-positive cells (p = 0.0058), suggesting that by disrupting the PD-1/PD-L1 axis we could decrease the amount of dysfunctional antigen-specific T cells. We observed that the anatomical location deeply influenced the state of CD4+ and CD8+ T lymphocytes. In fact, TIM-3 expression was increased by 2-fold on TILs compared to splenic and lymphoid T cells. In the CD8+ compartment, the expression of PD-1 on the surface seemed to be restricted to the tumor micro-environment, while CD4+ T cells had a high expression of PD-1 also in lymphoid organs. Interestingly, we found that the levels of PD-1 were significantly higher on CD8+ T cells than on CD4+ T cells in the tumor micro-environment (p < 0.0001). Conclusions: We demonstrated that the efficacy of immune checkpoint inhibitors might be strongly enhanced by their combination with cancer vaccines. PeptiCRAd was able to increase the number of antigen-specific T cells, and PD-L1 blockade prevented their exhaustion, resulting in long-lasting immunological memory and increased median survival.

    High Precision Detection of Salient Objects Based on Deep Convolutional Networks with Proper Combinations of Shallow and Deep Connections

    In this paper, a high-precision detection method for salient objects is presented, based on deep convolutional networks with proper combinations of shallow and deep connections. To achieve better performance in the extraction of deep semantic features of salient objects, the backbone network of a symmetric encoder-decoder architecture is upgraded with a transferable model, the ImageNet pre-trained ResNet50. Moreover, by introducing shallow and deep connections on multiple side outputs, feature maps generated from various layers of the deep neural network (DNN) model are fused so as to describe salient objects comprehensively from both local and global aspects. Afterwards, based on a holistically nested edge detector (HED) architecture, multiple fused side outputs with various sizes of receptive fields are integrated to form the final detection results. A series of experiments and assessments on extensive benchmark datasets demonstrates that our DNN model achieves superior accuracy in salient object detection, outperforming other published works.
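    A compact PyTorch sketch of an HED-style design on a ResNet50 backbone is given below: each backbone stage produces a single-channel side output, the side outputs are upsampled to the input resolution, and a 1x1 convolution fuses them into the final saliency map. The layer choices, head sizes and fusion scheme are illustrative assumptions, not the paper's exact architecture (which also uses a symmetric decoder); `weights=None` follows recent torchvision, whereas the paper starts from ImageNet pre-trained weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class SideFusionSaliency(nn.Module):
    """HED-style side outputs on a ResNet50 backbone with a learned fusion."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)   # ImageNet pre-trained weights assumed in the paper
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        # 1x1 convolutions turn each stage's features into a single-channel side output
        self.side_heads = nn.ModuleList([nn.Conv2d(c, 1, kernel_size=1)
                                         for c in (256, 512, 1024, 2048)])
        self.fuse = nn.Conv2d(4, 1, kernel_size=1)   # learned fusion of the side outputs

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        sides = [F.interpolate(head(f), size=(h, w), mode="bilinear", align_corners=False)
                 for head, f in zip(self.side_heads, feats)]
        fused = self.fuse(torch.cat(sides, dim=1))
        return torch.sigmoid(fused), [torch.sigmoid(s) for s in sides]

# Toy usage: one 224x224 RGB image.
model = SideFusionSaliency().eval()
with torch.no_grad():
    saliency, side_maps = model(torch.randn(1, 3, 224, 224))
```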

    Restoration of Partial Blurred Image Based on Blur Detection and Classification

    A new restoration algorithm for partially blurred images, based on blur detection and classification, is proposed in this paper. First, a new blur detection algorithm detects the blurred regions in the partially blurred image. Then, a new blur classification algorithm classifies these blurred regions. Once the blur class of a blurred region is confirmed, the structure of its blur kernel is determined, and a blur kernel estimation method is used to estimate the kernel. Finally, the blurred regions are restored using a non-blind image deblurring algorithm, and the blurred regions in the partially blurred image are replaced with the restored regions. Simulation experiments show that the proposed algorithm performs well.
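    The detect-then-restore pipeline can be sketched as follows, assuming a grayscale float image in [0, 1]: a crude detector marks windows with low Laplacian variance as blurred, non-blind Richardson-Lucy deconvolution (scikit-image) restores the image for a given blur kernel, and only the detected regions are replaced. The detector, the threshold and the single-kernel assumption are simplifications of the paper's detection, classification and per-region kernel estimation.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

def blur_map(gray, win=16, thresh=1e-3):
    """Mark windows whose Laplacian variance is low as blurred.
    The threshold is data dependent; gray is a float image in [0, 1]."""
    lap = convolve2d(gray, np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]), mode="same")
    mask = np.zeros(gray.shape, dtype=bool)
    h, w = gray.shape
    for i in range(0, h, win):
        for j in range(0, w, win):
            if lap[i:i + win, j:j + win].var() < thresh:
                mask[i:i + win, j:j + win] = True
    return mask

def restore_partial(gray, psf, win=16, thresh=1e-3):
    """Deblur the whole image with non-blind Richardson-Lucy deconvolution
    for the given kernel, then replace only the detected blurred regions."""
    mask = blur_map(gray, win, thresh)
    deblurred = richardson_lucy(gray, psf, clip=False)
    out = gray.copy()
    out[mask] = deblurred[mask]
    return out
```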

    High-Precision Detection of Defects of Tire Texture Through X-ray Imaging Based on Local Inverse Difference Moment Features

    Automatic defect detection is an important and challenging issue in tire industrial quality control. As is well known, the production quality of tires is directly related to vehicle running safety and passenger security; however, it is difficult to inspect the inner structure of a tire from its surface. This paper proposes a high-precision method for detecting defects in tire texture images obtained by an X-ray image sensor for non-destructive tire inspection. The feature distribution generated by local inverse difference moment (LIDM) features is proposed as an effective representation of the tire X-ray texture image. A defect feature map (DFM) is then constructed by computing the Hausdorff distance between the LIDM feature distributions of the original tire image and each sliding image patch. Moreover, the DFM may be enhanced by background suppression to improve the robustness of the defect detection algorithm. Finally, an effective defect detection algorithm is proposed to achieve high-precision, pixel-level detection of defects over the enhanced DFM. The defect detection algorithm is not only robust to background noise but also capable of handling defects of different shapes. To validate the proposed method, two kinds of experiments, on the defect feature map and on defect detection, are conducted, and a series of comparative analyses demonstrates that the proposed algorithm accurately detects defects and outperforms other algorithms in terms of various quantitative metrics.
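    A sketch of the LIDM feature distribution and the defect feature map is shown below, using scikit-image's gray-level co-occurrence matrices (whose 'homogeneity' property is the inverse difference moment) and a symmetric Hausdorff distance from SciPy between a reference distribution and each sliding patch. Window sizes, quantization levels and the exact LIDM definition are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from skimage.feature import graycomatrix, graycoprops

def lidm_distribution(patch, distances=(1, 2, 3), angles=(0.0, np.pi / 2), levels=64):
    """Inverse difference moment values over several GLCM offsets;
    scikit-image's 'homogeneity' property is the inverse difference moment."""
    q = (patch / max(patch.max(), 1e-6) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=list(distances), angles=list(angles),
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity").ravel()

def defect_feature_map(image, ref_dist, win=64, step=32):
    """Symmetric Hausdorff distance between a reference LIDM distribution
    (from a defect-free image) and the distribution of each sliding patch."""
    h, w = image.shape
    rows = (h - win) // step + 1
    cols = (w - win) // step + 1
    dfm = np.zeros((rows, cols))
    ref = np.asarray(ref_dist, dtype=float).reshape(-1, 1)
    for bi, i in enumerate(range(0, h - win + 1, step)):
        for bj, j in enumerate(range(0, w - win + 1, step)):
            cur = lidm_distribution(image[i:i + win, j:j + win]).reshape(-1, 1)
            dfm[bi, bj] = max(directed_hausdorff(ref, cur)[0],
                              directed_hausdorff(cur, ref)[0])
    return dfm
```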

    Closed-Loop Restoration Approach to Blurry Images Based on Machine Learning and Feedback Optimization


    Word Combination Kernel for Text Categorization

    We propose a novel kernel for text categorization. This kernel is an inner product in the feature space generated by all word combinations of a specified length. A word combination is a collection of distinct words co-occurring in the same sentence. A word combination of length k is weighted by the k-th root of the product of the inverse document frequencies (IDF) of its words. A computationally simple and efficient algorithm is proposed to calculate this kernel. By restricting the words of a word combination to the same sentence and considering multi-word combinations, the word combination features can capture similarity at a more specific level than single words. By discarding word order, the word combination features are more compatible with the flexibility of natural language, and the dimensionality of this kernel can be reduced significantly compared to the word-sequence kernel. We conducted a series of experiments on the Reuters-21578 and 20 Newsgroups datasets. This kernel consistently achieves better performance than the classical word kernel and the word-sequence kernel on both datasets. We also assessed the impact of word combination length on performance and compared the computing efficiency of this kernel with those of the word kernel and word-sequence kernel.
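    The kernel lends itself to a short sketch: map each document (a list of tokenized sentences) to a bag of length-k word combinations weighted by the k-th root of the product of their IDFs, then take the inner product of the two feature bags. Normalization and the efficient computation described in the abstract are omitted, and the IDF table in the usage example is made up.

```python
import math
from collections import Counter
from itertools import combinations

def word_combination_features(doc_sentences, k, idf):
    """Map a document (a list of tokenized sentences) to its word-combination
    features: every set of k distinct words co-occurring in one sentence,
    weighted by the k-th root of the product of the words' IDFs."""
    feats = Counter()
    for sent in doc_sentences:
        for combo in combinations(sorted(set(sent)), k):
            weight = math.prod(idf.get(t, 1.0) for t in combo) ** (1.0 / k)
            feats[combo] += weight
    return feats

def wc_kernel(doc_a, doc_b, k, idf):
    """Inner product in the word-combination feature space."""
    fa = word_combination_features(doc_a, k, idf)
    fb = word_combination_features(doc_b, k, idf)
    return sum(w * fb[c] for c, w in fa.items() if c in fb)

# Toy usage with a made-up IDF table.
idf = {"kernel": 2.3, "text": 1.1, "categorization": 2.0}
doc_a = [["kernel", "text", "categorization"], ["text", "feature"]]
doc_b = [["categorization", "kernel", "text"]]
print(wc_kernel(doc_a, doc_b, k=2, idf=idf))
```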